
    Reviewing Traffic Classification (Data Traffic Monitoring and Analysis)

    Traffic classification has received increasing attention in recent years. It aims at offering the ability to automatically recognize the application that has generated a given stream of packets from the direct and passive observation of the individual packets, or stream of packets, flowing in the network. This ability is instrumental to a number of activities that are of extreme interest to carriers, Internet service providers and network administrators in general. Indeed, traffic classification is the basic building block required to enable any traffic management operation, from differentiated traffic pricing and treatment (e.g., policing, shaping, etc.) to security operations (e.g., firewalling, filtering, anomaly detection, etc.). Until a few years ago, almost every Internet application used well-known transport-layer ports that easily allowed its identification. More recently, the number of applications using random or non-standard ports has dramatically increased (e.g., Skype, BitTorrent, VPNs, etc.). Moreover, network applications are often configured to use well-known ports assigned to other applications (e.g., TCP port 80, originally reserved for Web traffic) in an attempt to disguise their presence. For these reasons, and because of the importance of correctly classifying traffic flows, novel approaches based on packet inspection, statistical and machine learning techniques, and behavioral methods have been investigated and are becoming standard practice. In this chapter, we discuss the main trends in the field of traffic classification and describe some of the main proposals of the research community. We complete the chapter by developing two examples of behavioral classifiers: both use supervised machine learning algorithms for classification, but each is based on different features to describe the traffic. After presenting them, we compare their performance using a large dataset, showing the benefits and drawbacks of each approach.
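    As a rough illustration of the kind of supervised behavioral classifier the chapter describes, the Python sketch below trains a random-forest model on per-flow feature vectors. The features, labels, and random data are placeholders, not the chapter's actual traffic descriptors or dataset.

```python
# Minimal sketch of a supervised behavioral traffic classifier, assuming
# flows have already been reduced to fixed-length feature vectors
# (e.g., packet-size and inter-arrival-time statistics). The feature set,
# labels, and data below are illustrative stand-ins for real traces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-flow features: mean/std packet size, mean/std
# inter-arrival time, flow duration.
X = rng.random((1000, 5))
y = rng.choice(["web", "p2p", "voip"], size=1000)  # ground-truth labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Per-class precision/recall: the usual basis for comparing classifiers.
print(classification_report(y_test, clf.predict(X_test)))
```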

    Issues and future directions in traffic classification


    Chocolatine: Outage Detection for Internet Background Radiation

    The Internet is a complex ecosystem composed of thousands of Autonomous Systems (ASes) operated by independent organizations, each AS having a very limited view outside its own network. These complexities and limitations prevent network operators from precisely pinpointing the causes of service degradation or disruption when the problem lies outside their network. In this paper, we present Chocolatine, a simple and efficient solution to detect remote connectivity loss using Internet Background Radiation (IBR). IBR is unidirectional, unsolicited Internet traffic that is easily observed by monitoring unused address space. IBR features two remarkable properties: it originates worldwide, across diverse ASes, and it is incessant. We show that the number of IP addresses observed from an AS or a geographical area follows a periodic pattern. Then, using Seasonal ARIMA to statistically model IBR data, we predict the number of IPs for the next time window. Significant deviations from these predictions indicate an outage. We evaluated Chocolatine using data from the UCSD Network Telescope, operated by CAIDA, with a set of documented outages. Our experiments show that the proposed methodology achieves a good trade-off between true-positive rate (90%) and false-positive rate (2%) and largely outperforms CAIDA's own IBR-based detection method. Furthermore, comparing against other methods, i.e., BGP monitoring and active probing, we observe that Chocolatine shares a large common set of outages with them, in addition to many specific outages that would otherwise go undetected.
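    A minimal sketch of the detection idea in Python, assuming per-window counts of unique source IPs from one AS have already been extracted from telescope data. The synthetic signal and the (S)ARIMA orders below are illustrative, not the parameters tuned in the paper.

```python
# Chocolatine-style outage detection sketch: model the per-window count of
# unique source IPs seen from one AS as a seasonal ARIMA process and flag
# windows that fall far below the forecast. Data and orders are synthetic.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)

# Synthetic diurnal signal: one value per hour for 14 days (period = 24).
hours = np.arange(24 * 14)
ip_counts = 500 + 100 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 10, hours.size)
ip_counts[-5:] *= 0.2  # inject an "outage": observed counts collapse

train, test = ip_counts[:-24], ip_counts[-24:]

# Illustrative (not tuned) orders; seasonal period of 24 one-hour windows.
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)

forecast = model.get_forecast(steps=24)
lower = forecast.conf_int(alpha=0.01)[:, 0]  # 99% lower prediction bound

# A window whose observed count falls below the lower bound raises an alarm.
for t, (obs, lo) in enumerate(zip(test, lower)):
    if obs < lo:
        print(f"window {t}: observed {obs:.0f} < lower bound {lo:.0f} -> possible outage")
```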

    Gaining insight into AS-level outages through analysis of internet background radiation

    Internet Background Radiation (IBR) is unsolicited network traffic mostly generated by malicious software, e.g., worms and scans. In previous work, we extracted a signal from IBR traffic arriving at a large (/8) segment of unassigned IPv4 address space to identify large-scale disruptions of connectivity at an Autonomous System (AS) granularity, and used our technique to study episodes of government censorship and natural disasters [1]. Here we explore other IBR-derived metrics that may provide insights into the causes of macroscopic connectivity disruptions. We propose metrics indicating packet loss (e.g., due to link congestion) along a path from a specific AS to our observation point. We use three case studies to illustrate how our metrics can help identify the packet-loss characteristics of an outage. These metrics could be used in the diagnostic component of a semi-automated system for detecting and characterizing large-scale outages.

    At Home and Abroad: The Use of Denial-of-service Attacks during Elections in Nondemocratic Regimes

    In this article, we study the political use of denial-of-service (DoS) attacks, a particular form of cyberattack that disables web services by flooding them with high levels of data traffic. We argue that websites in nondemocratic regimes should be especially prone to this type of attack, particularly around political focal points such as elections. This is due to two mechanisms: governments employ DoS attacks to censor regime-threatening information, while at the same time, activists use DoS attacks as a tool to publicly undermine the government’s authority. We analyze these mechanisms by relying on measurements of DoS attacks based on large-scale Internet traffic data. Our results show that in authoritarian countries, elections indeed increase the number of DoS attacks. However, these attacks do not seem to be directed primarily against the country itself but rather against other states that serve as hosts for news websites from this country.

    When Parents and Children Disagree: Diving into DNS Delegation Inconsistency

    The Domain Name System (DNS) is a hierarchical, decentralized, and distributed database. A key mechanism that enables the DNS to be hierarchical and distributed is delegation [7] of responsibility from parent to child zones, typically managed by different entities. RFC 1034 [12] states that authoritative nameserver (NS) records at both parent and child should be “consistent and remain so”, but we find inconsistencies for over 13M second-level domains. We classify the types of inconsistencies we observe, and the behavior of resolvers in the face of such inconsistencies, using RIPE Atlas to probe our experimental domain configured for different scenarios. Our results underline the risk such inconsistencies pose to the availability of misconfigured domains.
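    One way such an inconsistency can be observed for a single domain is sketched below with dnspython: query a parent (TLD) server for the delegation NS set and compare it with the NS set the child zone serves itself. The choice of TLD server and the single-server sampling are simplifying assumptions; the paper's measurement covers parent and child nameservers at scale.

```python
# Parent/child NS-set comparison sketch for one domain, using dnspython.
# The parent view comes from a .com TLD server's AUTHORITY section (the
# delegation); the child view from one of the delegated servers itself.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

domain = "example.com"
query = dns.message.make_query(domain, dns.rdatatype.NS)

# Parent view: ask a .com TLD server (a.gtld-servers.net) for the NS set.
tld_server = dns.resolver.resolve("a.gtld-servers.net", "A")[0].address
resp = dns.query.udp(query, tld_server, timeout=5)
parent_ns = {rr.target.to_text().lower()
             for rrset in resp.authority if rrset.rdtype == dns.rdatatype.NS
             for rr in rrset}

# Child view: ask one of the delegated nameservers for the same NS set.
child_server = dns.resolver.resolve(next(iter(parent_ns)), "A")[0].address
resp = dns.query.udp(query, child_server, timeout=5)
child_ns = {rr.target.to_text().lower()
            for rrset in resp.answer if rrset.rdtype == dns.rdatatype.NS
            for rr in rrset}

print("parent-only:", parent_ns - child_ns)
print("child-only :", child_ns - parent_ns)
print("consistent :", parent_ns == child_ns)
```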

    BGPStream: A software framework for live and historical BGP data analysis

    We present BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Although BGP is a crucial operational component of the Internet infrastructure, and is the subject of research in the areas of Internet performance, security, topology, protocols, economics, etc., there is no efficient way of processing large amounts of distributed and/or live BGP measurement data. BGPStream fills this gap, enabling efficient investigation of events, rapid prototyping, and building complex tools and large-scale monitoring applications (e.g., detection of connectivity disruptions or BGP hijacking attacks). We discuss the goals and architecture of BGPStream. We apply the components of the framework to different scenarios, and we describe the development and deployment of complex services for global Internet monitoring that we built on top of it.
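    A minimal sketch of the usage pattern BGPStream's Python bindings (pybgpstream) support: replay a short window of historical updates from one collector and tally announcements per origin AS. The collector name and time window below are arbitrary choices for illustration.

```python
# Replay ten minutes of historical BGP updates from one Route Views
# collector via pybgpstream and count announcements per origin AS.
from collections import Counter

import pybgpstream

stream = pybgpstream.BGPStream(
    from_time="2017-07-07 00:00:00",
    until_time="2017-07-07 00:10:00",
    collectors=["route-views.sg"],
    record_type="updates",
)

origins = Counter()
for elem in stream:
    if elem.type == "A":  # announcements only (type "W" = withdrawals)
        path = elem.fields.get("as-path", "").split()
        if path:
            origins[path[-1]] += 1  # last hop on the AS path = origin AS

for asn, n in origins.most_common(5):
    print(f"AS{asn}: {n} announcements")
```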